Novel topological spin textures such as magnetic skyrmions benefit from their inherent stability, acting as the ground state in several magnetic systems. In current studies of atomic-monolayer magnetic materials, reasonable initial guesses are still needed to search for these magnetic patterns, which underlines the need for a more effective way to identify ground states. To address this problem, we propose a genetic-tunneling-driven, variance-controlled optimization approach that combines a local energy-minimizer back-end with a metaheuristic global-search front-end. The algorithm is an effective optimization solution for searching for magnetic ground states at extremely low temperatures and is also robust for finding low-energy degenerate states at finite temperatures. We demonstrate its success in finding the magnetic ground states of 2D monolayer systems with both artificial interactions and interactions calculated from density functional theory. Notably, the inherent concurrency of the algorithm can significantly decrease execution time. In conclusion, our proposed method provides a useful tool for energy optimization in low-dimensional magnetic systems.
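The local-minimizer back-end plus genetic global-search front-end can be illustrated on a toy Ising ring: a greedy single-spin-flip relaxer stands in for the local energy minimizer, and a small genetic loop (one-point crossover plus a single-flip "tunneling" mutation) stands in for the metaheuristic front-end. This is a minimal sketch under simplified assumptions, not the paper's algorithm; the energy model, operators, and population sizes are all illustrative.

```python
import random

def energy(spins, J=1.0):
    """Ising-like energy on a 1D ring: E = -J * sum_i s_i s_{i+1}."""
    n = len(spins)
    return -J * sum(spins[i] * spins[(i + 1) % n] for i in range(n))

def local_minimize(spins, J=1.0):
    """Back-end: greedy single-spin flips until no flip strictly lowers the energy."""
    improved = True
    while improved:
        improved = False
        for i in range(len(spins)):
            e0 = energy(spins, J)
            spins[i] *= -1
            if energy(spins, J) < e0:
                improved = True        # keep the flip
            else:
                spins[i] *= -1         # revert
    return spins

def genetic_search(n_spins=16, pop_size=8, generations=20, seed=0):
    """Front-end: crossover + mutation over locally relaxed configurations."""
    rng = random.Random(seed)
    pop = [[rng.choice([-1, 1]) for _ in range(n_spins)] for _ in range(pop_size)]
    pop = [local_minimize(s) for s in pop]
    for _ in range(generations):
        pop.sort(key=energy)
        survivors = pop[: pop_size // 2]          # keep the lowest-energy half
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_spins)
            child = a[:cut] + b[cut:]             # one-point crossover
            child[rng.randrange(n_spins)] *= -1   # mutation ("tunneling" kick)
            children.append(local_minimize(child))
        pop = survivors + children
    return min(pop, key=energy)
```

For the ferromagnetic ring the true ground state is the fully aligned configuration; the genetic layer helps escape the domain-wall local minima where the greedy relaxer alone gets stuck.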
Remote sensing image change detection is crucial in disaster assessment and urban planning. The mainstream approach uses encoder models to detect the changed regions between two input images. Because the changed content in remote sensing images is broad and diverse, detection accuracy is usually improved by adding attention mechanisms to the network, such as squeeze-and-excitation blocks, non-local blocks, and convolutional block attention modules. These methods consider the importance of features across channels or at different positions within a channel, but cannot perceive the differences between the input images. In this paper, we propose a novel image-difference attention network (IDAN). In the image preprocessing stage, we use a pretrained model to extract the feature differences between the two input images to obtain a feature-difference map (FD-map), and apply the Canny detector for edge detection to obtain an edge-difference map (ED-map). In the feature extraction stage, the FD-map and ED-map are fed into the feature-difference attention module and the edge-compensation module, respectively, to optimize the features extracted by IDAN. Finally, the change detection result is obtained through a feature-difference operation. IDAN comprehensively considers the differences in regional and edge features of the images, thereby optimizing the extracted image features. Experimental results show that the F1-score of IDAN improves by 1.62% and 1.98% over the baseline model on the WHU dataset and the LEVIR-CD dataset, respectively.
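The two preprocessing products can be sketched schematically: the FD-map as an element-wise absolute difference of two feature maps, and the ED-map as the difference of per-image edge maps. This is a hand-rolled toy, not IDAN's pipeline: the paper extracts features with a pretrained network and uses the Canny detector, whereas simple forward differences stand in for edges here.

```python
def feature_difference_map(feat_a, feat_b):
    """FD-map sketch: element-wise absolute difference of two 2D feature maps."""
    return [[abs(a - b) for a, b in zip(ra, rb)] for ra, rb in zip(feat_a, feat_b)]

def edge_difference_map(img_a, img_b, thresh=0.5):
    """ED-map sketch: binary gradient-magnitude edges of each image, then
    their absolute difference (the paper uses Canny; forward differences
    are a stand-in)."""
    def edges(img):
        h, w = len(img), len(img[0])
        out = [[0.0] * w for _ in range(h)]
        for y in range(h - 1):
            for x in range(w - 1):
                gx = img[y][x + 1] - img[y][x]   # horizontal gradient
                gy = img[y + 1][x] - img[y][x]   # vertical gradient
                mag = (gx * gx + gy * gy) ** 0.5
                out[y][x] = 1.0 if mag > thresh else 0.0
        return out
    return feature_difference_map(edges(img_a), edges(img_b))
```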
Our goal is to improve the performance of pixel-level hand localization (segmentation masks) under new imaging conditions (e.g., outdoor) when the model is trained on images captured under very different conditions (e.g., indoor). In the real world, it is important that a model can operate under a wide variety of imaging conditions. However, the variation covered by existing labeled hand datasets is limited. It is therefore necessary to adapt a model trained on labeled images (source) to unlabeled images (target) with unseen imaging conditions. Although self-training domain adaptation methods (i.e., learning from the model's own predictions in a self-supervised manner) have been developed for these two tasks, their training can degrade performance when the predictions on target images are noisy. To avoid this, it is crucial to assign low importance (confidence) to noisy predictions during self-training. In this paper, we propose to utilize the divergence of two predictions to estimate the confidence of a target image for both tasks. These predictions come from two separate networks, and their divergence helps identify noisy predictions. To incorporate the proposed confidence estimation into self-training, we propose a teacher-student framework in which the two networks (teachers) provide supervision to a network (student) for self-training, and the teachers learn from the student via knowledge distillation. Our experiments show that the method outperforms state-of-the-art approaches in adaptation settings with different lighting, grasped objects, backgrounds, and camera viewpoints. Compared with the latest adversarial adaptation method, our method improves the multi-task score on HO3D by 4%. We also validate our method on Ego4D, which exhibits rapidly changing outdoor imaging conditions.
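The core idea of down-weighting noisy pseudo-labels can be sketched as follows: confidence is derived from the disagreement between the two networks' predictions and then scales a per-pixel self-training loss. This is an illustrative simplification under assumed forms (confidence = 1 − |p_a − p_b|, a weighted binary cross-entropy), not the paper's exact loss or confidence estimator.

```python
import math

def confidence_weights(pred_a, pred_b):
    """Per-pixel confidence from the disagreement of two networks' probabilities:
    identical predictions get weight 1, maximal disagreement gets weight 0."""
    return [1.0 - abs(a - b) for a, b in zip(pred_a, pred_b)]

def weighted_self_training_loss(pred_student, pseudo_label, weights):
    """Self-training loss in which noisy pseudo-labels (low confidence) count
    less: per-pixel binary cross-entropy scaled by the confidence weight."""
    eps = 1e-7
    total = 0.0
    for p, y, w in zip(pred_student, pseudo_label, weights):
        p = min(max(p, eps), 1 - eps)            # clamp for numerical safety
        total += -w * (y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(weights)
```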
A key component of understanding hand-object interaction is the ability to identify the active object: the object being manipulated by the human hand. To accurately localize the active object, any method must use the information encoded by each image pixel, such as whether it belongs to the hand, the object, or the background. To leverage each pixel as evidence for determining the bounding box of the active object, we propose a pixel-wise voting function. Our pixel-wise voting function takes an initial bounding box as input and produces an improved bounding box of the active object as output. The voting function is designed so that each pixel inside the input bounding box votes for an improved bounding box, and the box receiving the majority of votes is selected as the output. We call the collection of bounding boxes generated inside the voting function the key box field, because it characterizes a field of bounding boxes defined in relation to the current bounding box. While our voting function is able to improve the bounding box of the active object, one round of voting is typically not sufficient to accurately localize the active object. We therefore apply the voting function repeatedly to sequentially improve the location of the bounding box. However, since repeated application of a one-step predictor (i.e., an auto-regressive process using our voting function) is known to cause a data distribution shift, we mitigate this issue using reinforcement learning (RL). We adopt standard RL to learn the voting function parameters and show that it provides meaningful improvements over a standard supervised learning approach. We perform experiments on two large-scale datasets, 100DOH and MECCANO, improving AP50 performance by 8% and 30%, respectively, over the state of the art.
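The vote-then-select loop can be sketched with a toy voting rule: every pixel inside the input box may cast a vote for a refined box, and the most-voted box wins. In this stand-in for the learned voting function (purely illustrative, not the paper's model), object pixels vote for the tight box around all object pixels, hand pixels vote to keep the current box, and background pixels abstain; the refinement is then applied repeatedly.

```python
from collections import Counter

def pixel_votes(initial_box, pixel_labels):
    """One voting round over the pixels inside initial_box = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = initial_box
    obj = [(x, y) for y in range(y0, y1) for x in range(x0, x1)
           if pixel_labels[y][x] == "object"]
    if not obj:
        return initial_box
    tight = (min(x for x, _ in obj), min(y for _, y in obj),
             max(x for x, _ in obj) + 1, max(y for _, y in obj) + 1)
    votes = Counter()
    for y in range(y0, y1):
        for x in range(x0, x1):
            if pixel_labels[y][x] == "object":
                votes[tight] += 1            # vote to shrink onto the object
            elif pixel_labels[y][x] == "hand":
                votes[initial_box] += 1      # vote to keep the current box
    return votes.most_common(1)[0][0]        # box with the majority of votes

def refine(box, labels, rounds=3):
    """Apply the voting function repeatedly to sequentially refine the box."""
    for _ in range(rounds):
        box = pixel_votes(box, labels)
    return box
```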
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS and its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features within a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at both the feature level and the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After these steps, performance on novel classes improves significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. Benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shot counts, e.g., boosting nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on few-shot object detection. Code and models will be available.
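The first insight, generating dynamic class centers from support masks to re-weight query features, can be sketched as masked average pooling followed by similarity-based re-weighting. This is a schematic of the idea only; RefT's actual module is learned and operates on Transformer features, and the dot-product re-weighting below is an assumed stand-in.

```python
def masked_class_center(features, mask):
    """Dynamic class center via masked average pooling: average the feature
    vectors at positions where the support mask is 1."""
    selected = [f for f, m in zip(features, mask) if m == 1]
    dim = len(features[0])
    if not selected:
        return [0.0] * dim
    return [sum(f[d] for f in selected) / len(selected) for d in range(dim)]

def reweight_query(query_features, center):
    """Re-weight each query feature vector by its dot-product similarity
    to the class center (illustrative re-weighting rule)."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    return [[dot(q, center) * x for x in q] for q in query_features]
```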
In this chapter, we review and discuss the transformation of AI technology in HCI/UX work and assess how AI technology will change how we do this work. We first discuss how AI can be used to enhance the results of user research and design evaluation. We then discuss how AI technology can be used to enhance HCI/UX design. Finally, we discuss how AI-enabled capabilities can improve UX when users interact with computing systems, applications, and services.
As one of the most important psychological stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Recognizing MEs automatically (MER) is therefore becoming increasingly crucial in the field of affective computing, providing essential technical support for lie detection, psychological analysis, and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Despite recent efforts to alleviate this problem with several spontaneous ME datasets, the amount of available data remains tiny. To address this ME data hunger, we construct a dynamic spontaneous ME dataset with the largest ME data scale to date, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. We then adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments and objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class-imbalance and key-frame sequence-sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate research on automatic MER and provides a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
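Two of the problems the abstract mentions have common textbook remedies that can be sketched briefly: inverse-frequency class weighting for class imbalance, and uniform temporal sampling of a fixed-length frame sequence from variable-length clips. These are generic illustrations of the problem setting, not DFME's specific solutions.

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-class weights inversely proportional to class frequency, a common
    remedy for class imbalance (weights average to 1 over classes)."""
    counts = Counter(labels)
    total = len(labels)
    return {c: total / (len(counts) * n) for c, n in counts.items()}

def sample_key_frames(num_frames, num_samples):
    """Uniformly sample a fixed-length index sequence from a clip of
    num_frames frames, always keeping the first and last frame."""
    if num_samples == 1:
        return [0]
    step = (num_frames - 1) / (num_samples - 1)
    return [round(i * step) for i in range(num_samples)]
```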
Face anti-spoofing (FAS) is essential to secure face recognition systems against various physical attacks. However, recent research generally focuses on short-distance applications (e.g., phone unlocking) while lacking consideration of long-distance scenes (e.g., surveillance security checks). To promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which has 101 subjects from different age groups, with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In such scenes, low image resolution and noise interference are new challenges for surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality from three aspects: (1) an Image Quality Variable (IQV) module is introduced to recover discriminative image information by incorporating a super-resolution network; (2) generated sample pairs are used to simulate the quality variance distribution, helping the contrastive learning strategy obtain robust feature representations under quality variation; (3) a Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, extensive experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
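Point (2), generating quality-variant sample pairs, can be sketched with a simple degradation: average-pool the image then upsample it back, so that (original, degraded) forms a quality-variant positive pair for contrastive training. This is an assumed toy degradation; CQIL's pair generation is part of the learned pipeline.

```python
def downsample_degrade(img, factor=2):
    """Simulate a low-quality surveillance view: average-pool by `factor`,
    then nearest-neighbor upsample back to the original size. The returned
    image can be paired with `img` as a quality-variant positive."""
    h, w = len(img), len(img[0])
    small = [[sum(img[y * factor + dy][x * factor + dx]
                  for dy in range(factor) for dx in range(factor)) / factor ** 2
              for x in range(w // factor)] for y in range(h // factor)]
    return [[small[y // factor][x // factor] for x in range(w)] for y in range(h)]
```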
When using LiDAR semantic segmentation models for safety-critical applications such as autonomous driving, it is essential to understand and improve their robustness to a wide range of LiDAR corruptions. In this paper, we aim to comprehensively analyze the robustness of LiDAR semantic segmentation models under various corruptions. To rigorously evaluate the robustness and generalizability of current approaches, we propose a new benchmark called SemanticKITTI-C, which features 16 out-of-domain LiDAR corruptions in three groups: adverse weather, measurement noise, and cross-device discrepancy. We then systematically investigate 11 LiDAR semantic segmentation models spanning different input representations (e.g., point clouds, voxels, projected images), network architectures, and training schemes. This study yields two insights: 1) the input representation plays a crucial role in robustness; specifically, under specific corruptions, different representations perform differently; 2) although state-of-the-art LiDAR semantic segmentation methods achieve promising results on clean data, they are less robust when dealing with noisy data. Finally, based on these observations, we design a robust LiDAR segmentation model (RLSeg) that greatly boosts robustness with simple but effective modifications. We expect that our benchmark, comprehensive analysis, and observations can advance future research on robust LiDAR semantic segmentation for safety-critical applications.
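The flavor of such corruptions can be sketched with two simple ones: per-point Gaussian jitter (measurement noise) and random point dropping (beam loss). This is purely illustrative; the 16 SemanticKITTI-C corruptions are far more elaborate and physically motivated.

```python
import random

def corrupt_point_cloud(points, noise_std=0.02, drop_prob=0.1, seed=0):
    """Apply two toy out-of-domain corruptions to a list of (x, y, z) points:
    randomly drop points with probability drop_prob, and jitter each
    surviving coordinate with Gaussian noise of std noise_std."""
    rng = random.Random(seed)
    out = []
    for x, y, z in points:
        if rng.random() < drop_prob:
            continue                     # simulate a lost return
        out.append((x + rng.gauss(0, noise_std),
                    y + rng.gauss(0, noise_std),
                    z + rng.gauss(0, noise_std)))
    return out
```

A robustness benchmark then compares a model's segmentation metric on the clean cloud against the same metric on the corrupted version.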
Panoptic Part Segmentation (PPS) unifies panoptic segmentation and part segmentation into one task. Previous works use separate approaches to handle thing, stuff, and part predictions without shared computation or task association. We aim to unify these tasks at the architectural level, designing the first end-to-end unified framework, Panoptic-PartFormer. Moreover, we find that the previous metric PartPQ is biased toward PQ. To handle both issues, we make the following contributions. First, we design a meta-architecture that decouples part features from thing/stuff features. We model things, stuff, and parts as object queries and directly learn to optimize all three forms of prediction as a unified mask prediction and classification problem; we term this model Panoptic-PartFormer. Second, we propose a new metric, Part-Whole Quality (PWQ), which better measures the task from both pixel-region and part-whole perspectives and decouples the errors of part segmentation and panoptic segmentation. Third, inspired by Mask2Former and based on our meta-architecture, we propose Panoptic-PartFormer++, which introduces a new part-whole cross-attention scheme, implemented via masked cross attention, to further boost part segmentation quality. Finally, extensive ablation studies and analysis demonstrate the effectiveness of both Panoptic-PartFormer and Panoptic-PartFormer++. Compared with Panoptic-PartFormer, our Panoptic-PartFormer++ achieves 2% PartPQ and 3% PWQ improvements on the Cityscapes PPS dataset and 5% PartPQ on the Pascal Context PPS dataset. On both datasets, Panoptic-PartFormer++ achieves new state-of-the-art results with a significant cost reduction of 70% in GFLOPs and 50% in parameters. Our models can serve as a strong baseline and aid future research in PPS. Code will be available.